
    CD/CV: Blockchain-based schemes for continuous verifiability and traceability of IoT data for edge-fog-cloud

    This paper presents a continuous delivery/continuous verifiability (CD/CV) method for IoT dataflows in edge-fog-cloud environments. A CD model based on an extraction, transformation, and load (ETL) mechanism, together with a directed acyclic graph (DAG) construction, enables end-users to create efficient schemes for the continuous verification and validation of application execution in edge-fog-cloud infrastructures. This scheme also verifies and validates established execution sequences and the integrity of digital assets. The CV model converts the ETL and DAG into a business model and smart contracts in a private blockchain for the automatic and transparent registration of the transactions performed by each application in the workflows/pipelines created by the CD model, without altering the applications or the edge-fog-cloud workflows. This model ensures that IoT dataflows deliver verifiable information that organizations can use to conduct critical decision-making processes with certainty. A containerized parallelism model solves portability issues and reduces/compensates for the overhead produced by CD/CV operations. We developed and implemented a prototype to create CD/CV schemes, which was evaluated in a case study where user mobility information is used to identify points of interest, patterns, and maps. The experimental evaluation revealed the efficiency of CD/CV in registering the transactions performed in IoT dataflows through edge-fog-cloud in a private blockchain network, in comparison with state-of-the-art solutions. This work has been partially supported by the project “CABAHLA-CM: Convergencia Big data-Hpc: de los sensores a las Aplicaciones” (S2018/TCS-4423) from the Madrid Regional Government, Spain; by the Spanish Ministry of Science and Innovation project “New Data Intensive Computing Methods for High-End and Edge Computing Platforms (DECIDE)”, Ref. PID2019-107858GB-I00; and by project 41756, “Plataforma tecnológica para la gestión, aseguramiento, intercambio y preservación de grandes volúmenes de datos en salud y construcción de un repositorio nacional de servicios de análisis de datos de salud”, funded by PRONACES-CONACYT, Mexico.
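
    A minimal sketch of the registration idea, assuming a toy in-memory, hash-chained ledger in place of the private blockchain and smart contracts, and hypothetical ETL stage names: each stage of a DAG-shaped pipeline is executed and its output digest is registered so the execution sequence and asset integrity can later be verified.

# Toy CD/CV sketch: ETL stages arranged as a DAG, with each execution
# registered in a hash-chained log that stands in for the smart-contract
# registration on a private blockchain. Stage names and the Ledger class are
# illustrative assumptions, not the authors' implementation.
import hashlib
import json
import time


class Ledger:
    """Toy append-only, hash-chained log emulating blockchain registration."""

    def __init__(self):
        self.records = []

    def register(self, payload: dict) -> str:
        prev = self.records[-1]["hash"] if self.records else "0" * 64
        body = json.dumps({"prev": prev, "payload": payload}, sort_keys=True)
        digest = hashlib.sha256(body.encode()).hexdigest()
        self.records.append({"hash": digest, "prev": prev, "payload": payload})
        return digest


# DAG of ETL stages: each stage lists its upstream dependencies.
DAG = {
    "extract": [],
    "transform": ["extract"],
    "load": ["transform"],
}


def run_stage(name: str, data: bytes) -> bytes:
    # Placeholder stage logic; a real pipeline would call the actual application.
    return hashlib.sha256(name.encode() + data).digest()


def execute(dag: dict, ledger: Ledger, seed: bytes = b"iot-readings"):
    done, outputs = set(), {None: seed}
    while len(done) < len(dag):
        for stage, deps in dag.items():
            if stage in done or any(d not in done for d in deps):
                continue
            # For simplicity only the first upstream output is forwarded.
            upstream = outputs[deps[0]] if deps else outputs[None]
            outputs[stage] = run_stage(stage, upstream)
            # Register the execution so it can later be verified and validated.
            ledger.register({
                "stage": stage,
                "output_sha256": hashlib.sha256(outputs[stage]).hexdigest(),
                "timestamp": time.time(),
            })
            done.add(stage)
    return ledger.records


if __name__ == "__main__":
    for record in execute(DAG, Ledger()):
        print(record["payload"]["stage"], record["hash"][:16])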

    A data integrity verification service for cloud storage based on building blocks

    Cloud storage is a popular solution for organizations and users to store data in a ubiquitous and cost-effective manner. However, violations of confidentiality and integrity are still issues associated with this technology. In this context, there is a need for tools that enable organizations/users to verify the integrity of their information stored in cloud services. In this paper, we present the design and implementation of an efficient service based on a provable data possession cryptographic model, which enables organizations to verify data integrity on demand without retrieving files from the cloud. The storage and cryptographic components have been developed in the form of building blocks, which are deployed on the user side using the Manager/Worker pattern that favors exploiting parallelism when executing data possession challenges. An experimental evaluation in a private cloud revealed the efficacy of launching integrity verification challenges against cloud storage services and the feasibility of applying a containerized task-parallel scheme that significantly improves the performance of the data possession proof service in real-world scenarios, in comparison with the implementation of the original data possession proof scheme. This work has been partially funded by GRANT Fondo Sectorial Mexican Space Agency-CONACYT Num. 262891 and by the EU under the COST programme Action IC1305, Network for Sustainable Ultrascale Computing (NESUS).
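
    A minimal sketch of the Manager/Worker idea, assuming plain SHA-256 challenges in place of the actual provable-data-possession cryptography: the manager samples random blocks and dispatches the challenges to worker processes in parallel.

# Simplified stand-in: plain SHA-256 tags instead of the paper's
# provable-data-possession scheme; the Manager/Worker split and the parallel
# dispatch of challenges are the point of the sketch.
import hashlib
import os
import random
from multiprocessing import Pool

BLOCK_SIZE = 4096


def split_blocks(data: bytes):
    return [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]


def tag(block: bytes, nonce: bytes) -> str:
    return hashlib.sha256(nonce + block).hexdigest()


def prove(args):
    """Worker side: recompute the tag for a challenged block."""
    index, block, nonce = args
    return index, tag(block, nonce)


def verify(data: bytes, challenges: int = 8, workers: int = 4) -> bool:
    """Manager side: sample blocks, dispatch challenges in parallel, compare."""
    blocks = split_blocks(data)
    nonce = os.urandom(16)
    # In a real service these tags would be computed before upload, and only
    # the tags (not the file) would be kept by the verifier.
    expected = {i: tag(b, nonce) for i, b in enumerate(blocks)}
    sample = random.sample(range(len(blocks)), min(challenges, len(blocks)))
    jobs = [(i, blocks[i], nonce) for i in sample]
    with Pool(workers) as pool:
        proofs = pool.map(prove, jobs)
    return all(expected[i] == proof for i, proof in proofs)


if __name__ == "__main__":
    print(verify(b"example cloud object " * 10_000))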

    A WoT-based method for creating digital sentinel twins of IoT devices

    The data produced by the sensors of IoT devices are becoming keystones for organizations to conduct critical decision-making processes. However, delivering information to these processes in real time presents two challenges for organizations: the first is achieving a constant dataflow from the IoT to the cloud, and the second is enabling decision-making processes to retrieve data from dataflows in real time. This paper presents a cloud-based Web of Things (WoT) method for creating digital twins of IoT devices, named sentinels. The novelty of the proposed approach is that sentinels create an abstract window for decision-making processes to: (a) find data (e.g., properties, events, and data from sensors of IoT devices) or (b) invoke functions (e.g., actions and tasks) from physical devices (PDs) as well as from virtual devices (VDs). In this approach, the applications and services of decision-making processes deal with sentinels instead of managing the complex details associated with the PDs, VDs, and cloud computing infrastructures. A prototype based on the proposed method was implemented to conduct a case study based on a blockchain system for verifying contract violations in sensors used in product transportation logistics. The evaluation showed the effectiveness of sentinels in enabling organizations to obtain data from IoT sensors and the dataflows used by decision-making processes to convert these data into useful information. This research was partially funded by project Num. 41756, “Plataforma tecnológica para la gestión, aseguramiento, intercambio y preservación de grandes volúmenes de datos en salud y construcción de un repositorio nacional de servicios de análisis de datos de salud”, by FORDECYT-PRONACES, Conacyt (México).
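
    A minimal sketch of the sentinel abstraction, with a hypothetical Device protocol and VirtualThermometer class standing in for real physical/virtual devices: applications read properties, invoke actions, and subscribe to events through the twin rather than through the device itself.

# Illustrative sentinel (digital twin) sketch; the Device protocol and all
# class names are assumptions, not the paper's WoT implementation.
from dataclasses import dataclass, field
from typing import Callable, Protocol


class Device(Protocol):
    def read(self, prop: str) -> float: ...
    def invoke(self, action: str, **kwargs) -> None: ...


@dataclass
class Sentinel:
    device: Device                       # physical (PD) or virtual (VD) device
    subscribers: list = field(default_factory=list)

    def read_property(self, name: str) -> float:
        """Find data: forward a property read to the underlying device."""
        return self.device.read(name)

    def invoke_action(self, name: str, **kwargs) -> None:
        """Invoke functions on the device without exposing its details."""
        self.device.invoke(name, **kwargs)

    def emit(self, event: dict) -> None:
        """Push sensor events to subscribed decision-making processes."""
        for callback in self.subscribers:
            callback(event)

    def subscribe(self, callback: Callable[[dict], None]) -> None:
        self.subscribers.append(callback)


class VirtualThermometer:
    """Toy virtual device used to exercise the sentinel."""

    def read(self, prop: str) -> float:
        return 21.5 if prop == "temperature" else 0.0

    def invoke(self, action: str, **kwargs) -> None:
        print("invoked", action, kwargs)


if __name__ == "__main__":
    twin = Sentinel(VirtualThermometer())
    twin.subscribe(lambda e: print("event:", e))
    twin.emit({"sensor": "temperature", "value": twin.read_property("temperature")})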

    A Blockchain and Fingerprinting Traceability Method for Digital Product Lifecycle Management

    The rise of digitalization, sensory devices, cloud computing, and Internet of Things (IoT) technologies enables the design of novel digital product lifecycle management (DPLM) applications for use cases such as the manufacturing and delivery of digital products. The verification of the accomplishment/violation of the agreements defined in digital contracts is a key task in digital business transactions. However, this verification represents a challenge when validating both the integrity of digital product content and the transactions performed during the multiple stages of the DPLM. This paper presents a traceability method for DPLM based on the integration of online and offline verification mechanisms, based on blockchain and fingerprinting, respectively. A blockchain lifecycle registration model is used by organizations to register the exchange of digital products in the cloud with partners and/or consumers throughout the DPLM stages, as well as to verify the accomplishment of agreements at each DPLM stage. The fingerprinting scheme is used for offline verification of digital product integrity and to register the DPLM logs within the digital products, which is useful in dispute or agreement-violation scenarios. We built a DPLM service prototype based on this method, which was implemented as a cloud computing service. A case study based on the DPLM of audio files was conducted to evaluate this prototype. The experimental evaluation revealed the ability of this method to be applied to DPLM in real scenarios in an efficient manner.
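
    A minimal sketch of the two verification paths, assuming a toy hash-chained registry in place of the blockchain and a plain SHA-256 fingerprint in place of the paper's fingerprinting scheme: lifecycle stages are registered online, and an offline fingerprint binds the product content to its DPLM log.

# Illustrative DPLM traceability sketch; the registry, the fingerprint, and
# the sample lifecycle stages are simplified assumptions, not the authors'
# blockchain or fingerprinting implementation.
import hashlib
import json


def fingerprint(content: bytes, log: list) -> str:
    """Offline verification token binding the product to its DPLM log."""
    payload = content + json.dumps(log, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()


class LifecycleRegistry:
    """Toy hash-chained registry emulating on-chain lifecycle registration."""

    def __init__(self):
        self.chain = []

    def register(self, stage: str, actor: str, product_hash: str) -> dict:
        prev = self.chain[-1]["hash"] if self.chain else "0" * 64
        entry = {"stage": stage, "actor": actor, "product": product_hash, "prev": prev}
        entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.chain.append(entry)
        return entry


if __name__ == "__main__":
    audio = b"\x00\x01fake-audio-bytes"
    registry, log = LifecycleRegistry(), []
    for stage, actor in [("manufacture", "studio"), ("delivery", "distributor")]:
        entry = registry.register(stage, actor, hashlib.sha256(audio).hexdigest())
        log.append({"stage": stage, "hash": entry["hash"]})
    # The fingerprint travels with the product for offline dispute resolution.
    print("fingerprint:", fingerprint(audio, log))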

    A novel transversal processing model to build environmental big data services in the cloud

    This paper presents a novel transversal, infrastructure-agnostic, and generic processing model to build environmental big data services in the cloud. Transversality is used to build processing structures (PSs) by reusing/coupling multiple existing software components for processing environmental monitoring, climate, and earth observation data, even at execution time, with datasets available in cloud-based repositories. Infrastructure agnosticism is used to deploy/execute PSs on the edge, fog, and/or cloud. Genericity is used to embed analytic, information-merging, machine learning, and statistical micro-services into PSs for automatically and transparently converting PSs into big data services that support decision-making procedures. A prototype was developed to conduct case studies based on climate data classification, earth observation products, and air pollution prediction by merging different climate monitoring data sources. The experimental evaluation revealed the efficacy and flexibility of this model for creating complex environmental big data services. This work has been partially supported by project 41756, "Plataforma tecnológica para la gestión, aseguramiento, intercambio y preservación de grandes volúmenes de datos en salud y construcción de un repositorio nacional de servicios de análisis de datos de salud", funded by FORDECYT-PRONACES.
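
    A minimal sketch of a processing structure (PS), assuming hypothetical component names and sample air-quality records: existing processing components are coupled into a pipeline at run time, and a generic statistics micro-service is appended to turn the PS into a simple big data service.

# Illustrative PS composition sketch; the clean/normalise/summarise components
# and the sample records are assumptions standing in for real monitoring and
# earth-observation software coupled by the model.
from statistics import mean
from typing import Callable, Iterable

Stage = Callable[[Iterable[dict]], Iterable[dict]]


def compose(*stages: Stage) -> Stage:
    """Couple existing components into a single processing structure."""
    def pipeline(records):
        for stage in stages:
            records = stage(records)
        return records
    return pipeline


# Reused "existing" components (stand-ins for monitoring/processing tools).
def clean(records):
    return [r for r in records if r["pm25"] is not None]


def normalise(records):
    return [{**r, "pm25": round(r["pm25"], 1)} for r in records]


# Generic analytic micro-service embedded into the PS.
def summarise(records):
    return [{"stations": len(records), "mean_pm25": mean(r["pm25"] for r in records)}]


if __name__ == "__main__":
    ps = compose(clean, normalise, summarise)
    readings = [{"station": "A", "pm25": 12.34}, {"station": "B", "pm25": None},
                {"station": "C", "pm25": 40.02}]
    print(ps(readings))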

    Exploiting lexical patterns for knowledge graph construction from unstructured text in Spanish

    Knowledge graphs (KGs) are useful data structures for the integration, retrieval, dissemination, and inference of information in various information domains. One of the main challenges in building KGs is the extraction of named entities (nodes) and their relations (edges), particularly when processing unstructured text, as it has no semantic descriptions. Generating KGs from texts written in Spanish represents a research challenge, as the existing structures, models, and strategies designed for other languages are not compatible with this scenario. This paper proposes a method to design and construct KGs from unstructured text in Spanish. We defined lexical patterns to extract named entities and (non-)taxonomic, equivalence, and composition relations. Next, named entities are linked and enriched with DBpedia resources through a strategy based on SPARQL queries. Finally, OWL properties are defined from the predicate relations to create resource description framework (RDF) triples. We evaluated the performance of the proposed method to determine the coverage of elements extracted from the input text and to assess their quality through standard information retrieval measures. The evaluation revealed the feasibility of the proposed method for extracting RDF triples from datasets in the general and computer science domains. Competitive results were observed when comparing our method against an existing approach from the literature.
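
    A minimal sketch of the extraction pipeline, assuming a single lexical pattern and direct URI construction instead of SPARQL-based enrichment: a Spanish "X es un/una Y" pattern yields a taxonomic relation, entities are mapped to DBpedia-style resources, and RDF triples are emitted with rdflib.

# Illustrative KG-construction sketch; the single pattern and the direct URI
# construction (rather than the paper's SPARQL-based DBpedia enrichment) are
# simplifying assumptions.
import re

from rdflib import Graph, Namespace, RDF, URIRef

DBR = Namespace("http://dbpedia.org/resource/")

# Lexical pattern for a taxonomic ("is-a") relation in Spanish.
TAXONOMIC = re.compile(r"(?P<entity>[A-ZÁÉÍÓÚÑ]\w+) es una? (?P<class_>\w+)")


def to_resource(label: str) -> URIRef:
    """Map a surface form to a DBpedia-style resource URI (simplified linking)."""
    return DBR[label.capitalize().replace(" ", "_")]


def extract_triples(text: str) -> Graph:
    graph = Graph()
    for match in TAXONOMIC.finditer(text):
        subject = to_resource(match["entity"])
        obj = to_resource(match["class_"])
        graph.add((subject, RDF.type, obj))
    return graph


if __name__ == "__main__":
    sample = "Python es un lenguaje. Cervantes es un escritor."
    print(extract_triples(sample).serialize(format="turtle"))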